270 research outputs found
Short-Term Plasticity and Long-Term Potentiation in Magnetic Tunnel Junctions: Towards Volatile Synapses
Synaptic memory is considered to be the main element responsible for learning
and cognition in humans. Although nanoelectronic synapses for neuromorphic
applications have traditionally implemented non-volatile long-term plasticity
changes, recent studies in neuroscience have revealed that biological synapses
undergo metastable volatile strengthening followed by long-term strengthening,
provided that the frequency of the input stimulus is sufficiently high. Such
"memory strengthening" and "memory decay"
functionalities can potentially lead to adaptive neuromorphic architectures. In
this paper, we demonstrate the close resemblance of the magnetization dynamics
of a Magnetic Tunnel Junction (MTJ) to short-term plasticity and long-term
potentiation observed in biological synapses. We illustrate that, in addition
to the magnitude and duration of the input stimulus, the frequency of the
stimulus plays a critical role in determining long-term potentiation of the
MTJ. Such MTJ synaptic memory arrays can be utilized to create compact,
ultra-fast, and low-power intelligent neural systems.
Comment: The article will appear in a future issue of Physical Review Applied.
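The magnetization dynamics the abstract refers to are conventionally captured by the Landau-Lifshitz-Gilbert equation augmented with a Slonczewski spin-transfer-torque term; a standard form is shown below (the symbols and sign conventions are the usual ones for such devices, not taken from this paper):

```latex
\frac{d\hat{m}}{dt} = -\gamma\,\hat{m}\times\vec{H}_{\mathrm{eff}}
  + \alpha\,\hat{m}\times\frac{d\hat{m}}{dt}
  - \frac{\gamma \hbar J}{2 e \mu_0 M_s t_F}\,\hat{m}\times\left(\hat{m}\times\hat{m}_p\right)
```

Here m̂ is the free-layer magnetization, H_eff the effective field, α the Gilbert damping constant, J the charge current density, M_s the saturation magnetization, t_F the free-layer thickness, and m̂_p the pinned-layer polarization direction; the competition between the input-driven torque term and relaxation toward H_eff is what produces the volatile ("short-term") and stable ("long-term") regimes.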
Encoding Neural and Synaptic Functionalities in Electron Spin: A Pathway to Efficient Neuromorphic Computing
Present-day computers expend orders of magnitude more computational resources
than the brain to perform the various cognitive and perception-related tasks
that humans carry out routinely every day. This has recently resulted in a
seismic shift in the field of computation, where research efforts are being
directed toward developing a neurocomputer that attempts to mimic the human
brain with nanoelectronic components and thereby harness its efficiency in
recognition problems. Bridging
the gap between neuroscience and nanoelectronics, this paper attempts to
provide a review of the recent developments in the field of spintronic device
based neuromorphic computing. We describe various spin-transfer torque
mechanisms that can potentially be utilized to realize device structures
mimicking neural and synaptic functionalities. A cross-layer
perspective extending from the device to the circuit and system level is
presented to envision the design of an All-Spin neuromorphic processor enabled
with on-chip learning functionalities. A device-circuit-algorithm
co-simulation framework calibrated to experimental results suggests that such
All-Spin neuromorphic systems can potentially achieve almost two orders of
magnitude energy improvement in comparison to state-of-the-art CMOS
implementations.
Comment: The paper will appear in a future issue of Applied Physics Reviews.
Hybrid Spintronic-CMOS Spiking Neural Network With On-Chip Learning: Devices, Circuits and Systems
Over the past decade, Spiking Neural Networks (SNNs) have emerged as one of
the most popular architectures to emulate the brain. In SNNs, information is
temporally encoded and communication between neurons is accomplished by means
of spikes.
In such networks, spike-timing dependent plasticity mechanisms require the
online programming of synapses based on the temporal information of spikes
transmitted by spiking neurons. In this work, we propose a spintronic synapse
with decoupled spike transmission and programming current paths. The spintronic
synapse consists of a ferromagnet-heavy metal heterostructure where programming
current through the heavy metal generates spin-orbit torque to modulate the
device conductance. Low programming energy and fast programming times
demonstrate the efficacy of the proposed device as a nanoelectronic synapse. We
perform a simulation study based on an experimentally benchmarked
device-simulation framework to demonstrate the interfacing of such spintronic
synapses with CMOS neurons and learning circuits operating in transistor
sub-threshold region to form a network of spiking neurons that can be utilized
for pattern recognition problems.
Comment: The article will appear in a future issue of Physical Review Applied.
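The abstract does not spell out the learning rule, but spike-timing dependent plasticity is typically modeled as a pair-based exponential update; a minimal sketch of what the programming circuit would compute is given below (all amplitudes and time constants are illustrative assumptions, not values from the paper):

```python
import math

# Pair-based exponential STDP: the sign and size of the weight update
# depend on the timing difference between post- and pre-synaptic spikes.
A_PLUS, A_MINUS = 0.01, 0.012        # potentiation/depression amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20e-3, 20e-3   # time constants in seconds (assumed)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Return the synaptic weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:   # pre fired before post -> potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:         # post fired before pre -> depress
        return -A_MINUS * math.exp(dt / TAU_MINUS)
```

In the proposed device, an update of this kind would be applied through the decoupled programming path (the heavy-metal current), leaving the spike-transmission path undisturbed.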
Proposal for an All-Spin Artificial Neural Network: Emulating Neural and Synaptic Functionalities Through Domain Wall Motion in Ferromagnets
Non-Boolean computing based on emerging post-CMOS technologies can
potentially pave the way for low-power neural computing platforms. However,
existing work on such emerging neuromorphic architectures has focused on
mimicking either the neuron or the synapse functionality alone. While memristive
devices have been proposed to emulate biological synapses, spintronic devices
have proved to be efficient at performing the thresholding operation of the
neuron at ultra-low currents. In this work, we propose an All-Spin Artificial
Neural Network where a single spintronic device acts as the basic building
block of the system. The device offers a direct mapping to synapse and neuron
functionalities in the brain while inter-layer network communication is
accomplished via CMOS transistors. To the best of our knowledge, this is the
first demonstration of a neural architecture where a single nanoelectronic
device is able to mimic both neurons and synapses. The ultra-low voltage
operation of low resistance magneto-metallic neurons enables the low-voltage
operation of the array of spintronic synapses, thereby leading to ultra-low
power neural architectures. Device-level simulations, calibrated to
experimental results, were used to drive the circuit- and system-level
simulations of the neural network for a standard pattern recognition problem.
Simulation studies indicate energy savings of ~100x in comparison to a
corresponding digital/analog CMOS neuron implementation.
Comment: The article will appear in a future issue of IEEE Transactions on Biomedical Circuits and Systems.
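As a rough illustration of how a single device can present an analog synaptic weight, the read conductance of a domain-wall device varies with the wall position between its fully parallel and fully antiparallel values; the sketch below assumes a simple linear dependence and illustrative conductances (neither is specified in the abstract):

```python
def dw_conductance(x: float, g_p: float = 2e-4, g_ap: float = 1e-4,
                   length: float = 100e-9) -> float:
    """Read conductance of a domain-wall synapse with the wall at position x.

    The region behind the wall contributes the parallel conductance g_p and
    the region ahead contributes the antiparallel conductance g_ap, so the
    device conductance interpolates linearly with wall position.
    All numbers are assumptions for illustration.
    """
    frac = min(max(x / length, 0.0), 1.0)  # clamp wall position to [0, 1]
    return frac * g_p + (1.0 - frac) * g_ap
```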
Conditional Deep Learning for Energy-Efficient and Enhanced Pattern Recognition
Deep learning neural networks have emerged as one of the most powerful
classification tools for vision related applications. However, the
computational and energy requirements associated with such deep nets can be
quite high, and hence their energy-efficient implementation is of great
interest. Although traditionally the entire network is utilized for the
recognition of all inputs, we observe that the classification difficulty varies
widely across inputs in real-world datasets; only a small fraction of inputs
require the full computational effort of a network, while a large majority can
be classified correctly with very low effort. In this paper, we propose
Conditional Deep Learning (CDL) where the convolutional layer features are used
to identify the variability in the difficulty of input instances and
conditionally activate the deeper layers of the network. We achieve this by
cascading a linear network of output neurons for each convolutional layer and
monitoring the output of the linear network to decide whether classification
can be terminated at the current stage or not. The proposed methodology thus
enables the network to dynamically adjust the computational effort depending
upon the difficulty of the input data while maintaining competitive
classification accuracy. We evaluate our approach on the MNIST dataset. Our
experiments demonstrate that the proposed CDL yields a 1.91x reduction in the
average number of operations per input, which translates to a 1.84x
improvement in energy. In addition, our results show an improvement in
classification accuracy from 97.5% to 98.9% as compared to the original
network.
Comment: 6 pages, 10 figures, 2 algorithms. Accepted at the Design, Automation and Test in Europe (DATE) conference, 2016.
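A minimal sketch of the conditional, early-exit inference flow the abstract describes is given below; the softmax-confidence test and its threshold are assumptions for illustration, and the paper's actual termination rule may differ:

```python
import numpy as np

def conditional_inference(x, conv_stages, linear_exits, threshold=0.9):
    """Run convolutional stages one at a time and stop as soon as the
    cascaded linear classifier at the current stage is confident enough.

    conv_stages:  list of callables mapping features -> features
    linear_exits: list of callables mapping features -> class scores
    threshold:    confidence required to terminate early (assumed value)
    """
    feats = x
    for stage, exit_head in zip(conv_stages, linear_exits):
        feats = stage(feats)
        scores = exit_head(feats)
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()              # softmax over class scores
        if probs.max() >= threshold:      # confident enough: exit here
            return int(probs.argmax())
    return int(probs.argmax())            # fell through to the final stage
```

Easy inputs exit at the first stages, which is where an average-operation reduction like the reported 1.91x would come from.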
TraNNsformer: Neural network transformation for memristive crossbar based neuromorphic system design
Implementation of neuromorphic systems using Memristive Crossbar Arrays
(MCAs) based on post-Complementary Metal-Oxide-Semiconductor (CMOS)
technologies has emerged as a promising solution to enable low-power
acceleration of
neural networks. However, the recent trend to design Deep Neural Networks
(DNNs) for achieving human-like cognitive abilities poses significant
challenges towards the scalable design of neuromorphic systems (due to the
increase in computation/storage demands). Network pruning [7] is a powerful
technique to remove redundant connections for designing optimally connected
(maximally sparse) DNNs. However, such pruning techniques induce irregular
connections that are incompatible with the crossbar structure and ultimately
produce DNNs with highly inefficient hardware realizations (in terms of area
and energy). In this work, we propose TraNNsformer - an integrated training
framework that transforms DNNs to enable their efficient realization on
MCA-based systems. TraNNsformer first prunes the connectivity matrix while
forming clusters with the remaining connections. Subsequently, it retrains the
network to fine-tune the connections and reinforce the clusters. This is done
iteratively to transform the original connectivity into an optimally pruned and
maximally clustered mapping. Without accuracy loss, TraNNsformer reduces the
area (energy) consumption by 28% - 55% (49% - 67%) with respect to the original
network. Compared to network pruning, TraNNsformer achieves 28% - 49% (15% -
29%) area (energy) savings. Furthermore, TraNNsformer is a technology-aware
framework that allows mapping a given DNN to any MCA size permissible by the
memristive technology for reliable operation.
Comment: (8 pages, 9 figures) Published in Computer-Aided Design (ICCAD), 2017 IEEE/ACM International Conference on.
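A hedged sketch of the iterative prune-cluster-retrain loop described above is shown below; the magnitude-based pruning criterion, the block-occupancy clustering heuristic, and all thresholds are assumptions for illustration rather than the paper's exact algorithm:

```python
import numpy as np

def trannsform(w, crossbar_size=64, prune_frac=0.1, n_rounds=5, retrain=None):
    """Iteratively prune a layer's weight matrix and concentrate the
    surviving weights into crossbar-sized clusters, retraining in between.

    w:        2-D weight matrix of one DNN layer
    retrain:  callable fine-tuning w under a fixed zero mask (hypothetical
              hook into the training framework)
    """
    for _ in range(n_rounds):
        # 1. Prune the smallest-magnitude fraction of the remaining weights.
        thresh = np.quantile(np.abs(w[w != 0]), prune_frac)
        w[np.abs(w) < thresh] = 0.0
        # 2. Cluster: vacate crossbar-sized blocks that are nearly empty so
        #    the nonzeros concentrate in as few crossbars as possible.
        for i in range(0, w.shape[0], crossbar_size):
            for j in range(0, w.shape[1], crossbar_size):
                block = w[i:i + crossbar_size, j:j + crossbar_size]
                if np.count_nonzero(block) < 0.05 * block.size:
                    block[:] = 0.0
        # 3. Retrain with the sparsity pattern frozen to recover accuracy.
        if retrain is not None:
            w = retrain(w, mask=(w != 0))
    return w
```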
An All-Memristor Deep Spiking Neural Computing System: A Step Towards Realizing the Low-Power, Stochastic Brain
Deep 'Analog Artificial Neural Networks' (ANNs) solve complex classification
problems with remarkably high accuracy. However, they rely on enormous
amounts of power to perform the calculations, overshadowing the accuracy
benefits. The biological brain, on the other hand, is significantly more
powerful than such networks and consumes orders of magnitude less power,
pointing to a conceptual mismatch. Given that biological neurons communicate
using energy-efficient trains of spikes and that their behavior is
non-deterministic, incorporating these effects into deep artificial neural
networks may take us a few steps toward a more realistic neuron. In this
work, we show how the
inherent stochasticity of nano-scale resistive devices can be harnessed to
emulate the functionality of a spiking neuron that can be incorporated in deep
stochastic Spiking Neural Networks (SNN). At the algorithmic level, we propose
how the training can be modified to convert an ANN to an SNN while supporting
the stochastic activation function offered by these devices. We devise circuit
architectures to incorporate stochastic memristive neurons along with
memristive crossbars which perform the functionality of the synaptic weights.
We tested the proposed all-memristor deep stochastic SNN on image
classification and observed only about 1% degradation in accuracy relative to
the ANN baseline after incorporating the circuit- and device-related
non-idealities. We find that the network is robust to certain variations and
consumes ~6.4x less energy than its CMOS counterpart.
Comment: In IEEE Transactions on Emerging Topics in Computational Intelligence.
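A minimal sketch of the stochastic spiking activation the abstract alludes to, assuming a sigmoidal device switching probability (the actual device characteristic in the paper may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_neuron(drive: np.ndarray, slope: float = 4.0) -> np.ndarray:
    """Emit spikes with probability given by a sigmoid of the input drive,
    mimicking the probabilistic switching of a resistive device.
    `slope` controls how sharply the probability rises (assumed value).
    """
    p_spike = 1.0 / (1.0 + np.exp(-slope * drive))
    return (rng.random(drive.shape) < p_spike).astype(np.float32)
```

Averaged over time steps, the spike rate approximates the sigmoid, which is roughly what allows an ANN trained with such an activation to be converted into the stochastic SNN.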
RxNN: A Framework for Evaluating Deep Neural Networks on Resistive Crossbars
Resistive crossbars designed with non-volatile memory devices have emerged as
promising building blocks for Deep Neural Network (DNN) hardware, due to their
ability to compactly and efficiently realize vector-matrix multiplication
(VMM), the dominant computational kernel in DNNs. However, a key challenge with
resistive crossbars is that they suffer from a range of device and circuit
level non-idealities such as interconnect parasitics, peripheral circuits,
sneak paths, and process variations. These non-idealities can lead to errors in
VMMs, eventually degrading the DNN's accuracy. It is therefore critical to
study the impact of crossbar non-idealities on the accuracy of large-scale
DNNs. However, this is challenging because existing device and circuit models
are too slow to use in application-level evaluations.
We present RxNN, a fast and accurate simulation framework to evaluate
large-scale DNNs on resistive crossbar systems. RxNN splits and maps the
computations involved in each DNN layer into crossbar operations, and evaluates
them using a Fast Crossbar Model (FCM) that accurately captures the errors
arising due to crossbar non-idealities while being four-to-five orders of
magnitude faster than circuit simulation. FCM models a crossbar-based VMM
operation using three stages: non-linear models for the input peripheral
circuits (DACs), an equivalent non-ideal conductance matrix for the core
crossbar array, and non-linear models for the output peripheral circuits
(ADCs). We implement RxNN by extending the Caffe
machine learning framework and use it to evaluate a suite of six large-scale
DNNs developed for the ImageNet Challenge. Our experiments reveal that
resistive crossbar non-idealities can lead to significant accuracy degradations
(9.6%-32%) for these large-scale DNNs. To the best of our knowledge, this work
is the first quantitative evaluation of the accuracy of large-scale DNNs on
resistive crossbar based hardware.
Comment: 13 pages, 16 figures. Accepted in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2021.
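A hedged sketch of the three-stage VMM evaluation described above (the quantizer resolutions and the way parasitics fold into the conductance matrix are assumptions, not the paper's FCM):

```python
import numpy as np

def quantize(x, n_bits, x_max):
    """Uniform quantizer standing in for the DAC/ADC transfer functions."""
    levels = 2 ** n_bits - 1
    return np.round(np.clip(x, 0.0, x_max) / x_max * levels) / levels * x_max

def crossbar_vmm(v_in, g_prog, wire_g=1e-3, dac_bits=8, adc_bits=8):
    """Three-stage crossbar VMM: DAC -> non-ideal conductances -> ADC.

    g_prog: programmed (ideal) conductance matrix, shape (rows, cols)
    wire_g: interconnect conductance used to derate each device (assumed)
    """
    v = quantize(v_in, dac_bits, x_max=1.0)        # stage 1: input DACs
    # Stage 2: equivalent non-ideal matrix; a series wire resistance
    # attenuates each device's effective conductance.
    g_eff = g_prog / (1.0 + g_prog / wire_g)
    i_out = g_eff.T @ v                            # column output currents
    return quantize(i_out, adc_bits,
                    x_max=float(i_out.max()) + 1e-12)  # stage 3: ADCs
```

The deviations between `g_eff.T @ v` and the ideal `g_prog.T @ v_in` are the kind of VMM errors that, accumulated across layers, produce accuracy degradations like those reported above.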
Probabilistic Deep Spiking Neural Systems Enabled by Magnetic Tunnel Junction
Deep Spiking Neural Networks are becoming increasingly powerful tools for
cognitive computing platforms. However, most of the existing literature on
such computing models has been developed with limited insight into the
underlying hardware implementation, resulting in area- and power-expensive
designs. Although several
neuromimetic devices emulating neural operations have been proposed recently,
their functionality has been limited to very simple neural models that may
prove to be inefficient at complex recognition tasks. In this work, we venture
into the relatively unexplored area of utilizing the inherent device
stochasticity of such neuromimetic devices to model complex neural
functionalities in a probabilistic framework in the time domain. We consider
the implementation of a Deep Spiking Neural Network capable of performing high
accuracy and low latency classification tasks where the neural computing unit
is enabled by the stochastic switching behavior of a Magnetic Tunnel Junction.
Simulation studies indicate an energy improvement over a baseline CMOS
design.
Comment: The article will appear in a future issue of IEEE Transactions on Electron Devices.
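The stochastic switching that enables such a probabilistic neuron is commonly described, in the thermally activated regime, by a Néel-Brown-type switching probability; a standard form (all symbols assumed here, not taken from the abstract) is:

```latex
P_{\mathrm{sw}}(t) \;=\; 1 - \exp\!\left[-\frac{t}{\tau_0}\,
  \exp\!\left(-\Delta\left(1 - \frac{I}{I_c}\right)\right)\right]
```

where t is the stimulus duration, τ0 the attempt period (on the order of 1 ns), Δ the thermal stability factor of the MTJ, I the applied current, and I_c the critical switching current; modulating I thus tunes the spiking probability in the time domain.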
Toward Fast Neural Computing using All-Photonic Phase Change Spiking Neurons
The rapid growth of brain-inspired computing, coupled with the inefficiencies
of CMOS implementations of neuromorphic systems, has led to intense
exploration of efficient hardware implementations of the functional units of
the brain, namely neurons and synapses. However, efforts have largely been
invested in implementations in the electrical domain, with potential
limitations in switching speed, packing density of large integrated systems,
and interconnect losses. As an alternative, neuromorphic engineering in the
photonic domain has recently gained attention. In this work, we demonstrate a
purely photonic operation of an Integrate-and-Fire spiking neuron, based on
the phase-change dynamics of GeSbTe (GST) embedded on top of a microring
resonator, which alleviates the energy constraints of phase-change materials
(PCMs) in the electrical domain. We also show that such a neuron can
potentially be integrated with on-chip synapses into an all-photonic spiking
neural network inference framework, which promises to be ultra-fast and can
potentially offer a large operating bandwidth.
Comment: 10 pages, 5 figures, accepted in Nature Scientific Reports.
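A minimal sketch of the integrate-and-fire behavior such a device realizes, with a scalar state variable standing in for the crystallized fraction of the GST element (the dynamics and constants are assumptions for illustration):

```python
def integrate_and_fire(pulse_energies, leak=0.02, gain=0.1, threshold=1.0):
    """Integrate input optical pulse energies into a state variable (a proxy
    for the GST crystallized fraction); emit a spike and reset (re-amorphize)
    once the state crosses threshold. All constants are assumed.
    """
    state, spikes = 0.0, []
    for p in pulse_energies:
        state = max(0.0, state * (1.0 - leak) + gain * p)  # leaky integration
        if state >= threshold:
            spikes.append(1)
            state = 0.0   # reset pulse re-amorphizes the GST
        else:
            spikes.append(0)
    return spikes
```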